Neural Network Learning


Stabilizing Dynamic Systems through Neural Network Learning: A Robust Approach

Zhang, Yu, Zhang, Haoyu, Zou, Yongxiang, Li, Houcheng, Cheng, Long

arXiv.org Artificial Intelligence

Point-to-point and periodic motions are ubiquitous in robotics. To master these motions, Autonomous Dynamic System (ADS) based algorithms are fundamental in the domain of Learning from Demonstration (LfD). However, these algorithms face the significant challenge of balancing learning precision against system stability. This paper addresses this challenge by presenting a novel ADS algorithm that leverages neural network technology. The proposed algorithm is designed to distill essential knowledge from demonstration data, ensuring stability while learning both point-to-point and periodic motions. For point-to-point motions, a neural Lyapunov function is proposed to align with the provided demonstrations. For periodic motions, the neural Lyapunov function is combined with transversal contraction to ensure that all generated motions converge to a stable limit cycle. The model uses a streamlined neural network architecture to achieve dual objectives: optimizing learning accuracy while maintaining global stability. To thoroughly assess the efficacy of the proposed algorithm, rigorous evaluations are conducted on the LASA dataset and a manually designed dataset, complemented by empirical validation through robotic experiments, providing robust evidence of the algorithm's performance.
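The stability idea summarized above can be illustrated with a toy sketch: pick a candidate Lyapunov function (here simply V(x) = ||x||², not a learned neural one) and project the nominal velocity whenever it would fail to decrease V. The nominal field, the margin parameter, and all function names below are hypothetical illustrations, not the paper's actual neural Lyapunov model.

```python
import numpy as np

def nominal_dynamics(x):
    # Hypothetical "learned" planar field: a damped rotation plus a
    # nonlinear term that can locally push energy upward.
    A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
    return A @ x + 0.5 * np.array([x[1] ** 2, 0.0])

def lyapunov_grad(x):
    # Gradient of the quadratic candidate V(x) = ||x||^2.
    return 2.0 * x

def stabilized_dynamics(x, margin=0.1):
    # Enforce dV/dt <= -margin * V: if <grad V, f> violates the bound,
    # subtract the violating component along grad V.
    f = nominal_dynamics(x)
    g = lyapunov_grad(x)
    v = float(x @ x)
    slack = g @ f + margin * v
    if slack > 0.0:
        f = f - slack * g / (g @ g + 1e-12)
    return f

def simulate(x0, dt=0.01, steps=4000):
    # Forward-Euler rollout of the stabilized system.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * stabilized_dynamics(x)
    return x
```

With the correction active, V decays at least exponentially along trajectories, so rollouts started anywhere contract toward the equilibrium at the origin.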


A Unified Kernel for Neural Network Learning

Zhang, Shao-Qun, Chen, Zong-Yi, Tian, Yong-Ming, Lu, Xun

arXiv.org Artificial Intelligence

Past decades have witnessed great interest in the distinction and connection between neural network learning and kernel learning. Recent advancements have made theoretical progress in connecting infinitely wide neural networks and Gaussian processes. Two predominant approaches have emerged: the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK). The former, rooted in Bayesian inference, is a zero-order kernel, while the latter, grounded in the tangent space of gradient descent, is a first-order kernel. In this paper, we present the Unified Neural Kernel (UNK), which characterizes the learning dynamics of neural networks under gradient descent and parameter initialization. The proposed UNK kernel retains the limiting properties of both NNGP and NTK, behaving like the NTK for a finite learning step and converging to the NNGP as the learning step approaches infinity. We also theoretically characterize the uniform tightness and learning convergence of the UNK kernel, providing comprehensive insights into this unified kernel. Experimental results underscore the effectiveness of the proposed method.
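The first-order kernel mentioned above can be made concrete with the empirical NTK of a toy network: the Gram matrix of parameter gradients, K(x, x') = ⟨∇_θ f(x), ∇_θ f(x')⟩, evaluated at initialization. The tiny MLP, shapes, and finite-difference gradients below are illustrative assumptions, not the UNK construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer MLP with NTK-style 1/sqrt(width) scaling.
d, h = 3, 16
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
W2 = rng.normal(size=(1, h)) / np.sqrt(h)

def forward(params, x):
    w1, w2 = params
    return float(w2 @ np.tanh(w1 @ x))

def flat_grad(params, x, eps=1e-5):
    # Central-difference gradient of f(x) w.r.t. all flattened parameters.
    theta = np.concatenate([p.ravel() for p in params])
    g = np.zeros_like(theta)

    def unflat(t):
        return [t[: h * d].reshape(h, d), t[h * d:].reshape(1, h)]

    for i in range(theta.size):
        t_plus, t_minus = theta.copy(), theta.copy()
        t_plus[i] += eps
        t_minus[i] -= eps
        g[i] = (forward(unflat(t_plus), x) - forward(unflat(t_minus), x)) / (2 * eps)
    return g

def empirical_ntk(X):
    # K[i, j] = <grad_theta f(X[i]), grad_theta f(X[j])>.
    G = np.stack([flat_grad([W1, W2], x) for x in X])
    return G @ G.T

X = rng.normal(size=(4, d))
K = empirical_ntk(X)
```

Being a Gram matrix of gradients, K is symmetric and positive semidefinite by construction, which is the basic sanity check any kernel variant (NNGP, NTK, or a unified one) must pass.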


JetLOV: Enhancing Jet Tree Tagging through Neural Network Learning of Optimal LundNet Variables

Diaz, Mauricio A., Cerro, Giorgio, Chaplais, Jacan, Dasmahapatra, Srinandan, Moretti, Stefano

arXiv.org Artificial Intelligence

Machine learning has played a pivotal role in advancing physics, with deep learning notably contributing to complex classification problems such as jet tagging in jet physics. In this work, we aim to harness the full potential of neural networks while acknowledging that, at times, we may lose sight of the underlying physics governing these models. Nevertheless, we demonstrate that we can achieve remarkable results while obscuring physics knowledge and relying entirely on the model's outcome. We introduce JetLOV, a composite of two models: a straightforward multilayer perceptron (MLP) and the well-established LundNet. Our study reveals that we can attain comparable jet tagging performance without relying on the pre-computed LundNet variables. Instead, we allow the network to autonomously learn an entirely new set of variables, devoid of a priori knowledge of the underlying physics. These findings hold promise, particularly for addressing the issue of model dependence, which can be mitigated through generalization and training on diverse datasets.


Large Neural Networks Learning from Scratch with Very Few Data and without Explicit Regularization

Linse, Christoph, Martinetz, Thomas

arXiv.org Artificial Intelligence

Recent findings have shown that highly over-parameterized Neural Networks generalize without pretraining or explicit regularization. This is achieved with zero training error, i.e., complete over-fitting by memorizing the training data, which is surprising because it runs completely against traditional machine learning wisdom. In our empirical study, we corroborate these findings in the domain of fine-grained image classification. We show that very large Convolutional Neural Networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization, or pretraining. We train the architectures ResNet18, ResNet101, and VGG19 on subsets of the challenging benchmark datasets Caltech101, CUB_200_2011, FGVC-Aircraft, Flowers102, and StanfordCars with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that a randomly initialized VGG19 with 140 million weights learns to distinguish airplanes from motorbikes with up to 95% accuracy using only 20 training samples per class.



Neural Network Learning: Theoretical Foundations

Shawe-Taylor, John

AI Magazine

The scientific method aims to derive mathematical models that help us understand and exploit phenomena, whether natural or human-made. Machine learning, and more particularly learning with neural networks, can be viewed as just such a phenomenon. Frequently, remarkable performance is obtained by training networks to perform relatively complex AI tasks. Despite this success, most practitioners would readily admit that they are far from fully understanding why and, more importantly, when the techniques can be expected to be effective. The need for a fuller theoretical analysis and understanding of their performance has been a major research objective for the last decade. Neural Network Learning: Theoretical Foundations reports on important developments that have been made toward this goal within the computational learning theory framework.